Explainable Artificial Intelligence (XAI) Model for Earthquake Spatial Probability Assessment in Arabian Peninsula
Authors
Abstract
Among all the natural hazards, earthquake prediction is an arduous task. Although many studies have been published on earthquake hazard assessment (EHA), very few have explored the use of artificial intelligence (AI) in spatial probability assessment (SPA). A great deal of complexity is observed in the SPA modeling process due to the involvement of seismological and geophysical factors. Recent studies have shown that the insertion of certain integrated factors, such as ground shaking, seismic gap, and tectonic contacts, into the AI model improves accuracy to a great extent. Because of the black-box nature of AI models, this paper explores an explainable AI (XAI) model for SPA. This study aims to develop a hybrid Inception v3-ensemble extreme gradient boosting (XGBoost) model with Shapley additive explanations (SHAP). The model would efficiently interpret and recognize the factors' behavior and their weighted contribution; the work explains the specific factors responsible for SPA and their importance. Earthquake inventory data were collected from the US Geological Survey (USGS) for the past 22 years, covering magnitudes of 5 Mw and above. Landsat-8 satellite imagery and a digital elevation model (DEM) were also incorporated into the analysis. Results revealed that the SHAP outputs align with the Inception v3-XGBoost model (87.9% accuracy) explanations, thus indicating the necessity to add new factors such as seismic gaps and tectonic contacts, whose absence makes the model perform poorly. According to the SHAP interpretations, peak ground accelerations (PGA), magnitude variation, and epicenter density are the most critical factors. The recent Turkey earthquakes (Mw 7.8, 7.5, and 6.7) along the active East Anatolian fault validate the obtained AI-based results. The conclusions drawn from the explainable algorithm depicted the relevant, irrelevant, and futuristic factors in SPA modeling.
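As an illustration of the attribution idea behind SHAP that the abstract relies on (decomposing a model's output into weighted per-factor contributions), the following is a minimal, self-contained sketch that computes exact Shapley values for a toy three-factor hazard score. The feature names and the scoring function are hypothetical stand-ins, not the paper's Inception v3-XGBoost model; the SHAP library approximates these values efficiently for real tree ensembles.

```python
# Exact Shapley values for a toy 3-feature "hazard score" function.
# All feature names and effect sizes below are illustrative only.
from itertools import combinations
from math import factorial

FEATURES = ["pga", "magnitude_variation", "epicenter_density"]

def model(active):
    """Toy score: a baseline plus per-feature effects and one interaction."""
    score = 0.1
    if "pga" in active:
        score += 0.4
    if "magnitude_variation" in active:
        score += 0.2
    if "epicenter_density" in active:
        score += 0.1
    if "pga" in active and "epicenter_density" in active:
        score += 0.1  # interaction: dense epicenters amplify the PGA effect
    return score

def shapley_value(feature):
    """Average marginal contribution of `feature` over all coalitions."""
    n = len(FEATURES)
    others = [f for f in FEATURES if f != feature]
    total = 0.0
    for k in range(n):
        for subset in combinations(others, k):
            s = set(subset)
            weight = factorial(k) * factorial(n - k - 1) / factorial(n)
            total += weight * (model(s | {feature}) - model(s))
    return total

values = {f: shapley_value(f) for f in FEATURES}
# Efficiency property: contributions sum to model(all) - model(none).
assert abs(sum(values.values()) - (model(set(FEATURES)) - model(set()))) < 1e-9
```

Note how the 0.1 interaction term is split equally between `pga` and `epicenter_density`, so the ranking of factors reflects both main effects and interactions — the same property that lets SHAP expose which integrated factors (e.g., seismic gaps, tectonic contacts) a black-box model depends on.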
Similar resources
Explainable Artificial Intelligence for Training and Tutoring
This paper describes an Explainable Artificial Intelligence (XAI) tool that allows entities to answer questions about their activities within a tactical simulation. We show how XAI can be used to provide more meaningful after-action reviews and discuss ongoing work to integrate an intelligent tutor into the XAI framework.
Building Explainable Artificial Intelligence Systems
As artificial intelligence (AI) systems and behavior models in military simulations become increasingly complex, it has been difficult for users to understand the activities of computer-controlled entities. Prototype explanation systems have been added to simulators, but designers have not heeded the lessons learned from work in explaining expert system behavior. These new explanation systems a...
Automated Reasoning for Explainable Artificial Intelligence
Reasoning and learning have been considered fundamental features of intelligence ever since the dawn of the field of artificial intelligence, leading to the development of the research areas of automated reasoning and machine learning. This paper discusses the relationship between automated reasoning and machine learning, and more generally between automated reasoning and artificial intelligenc...
Explainable Artificial Intelligence via Bayesian Teaching
Modern machine learning methods are increasingly powerful and opaque. This opaqueness is a concern across a variety of domains in which algorithms are making important decisions that should be scrutable. The explainability of machine learning systems is therefore of increasing interest. We propose an explanation-by-examples approach that builds on our recent research in Bayesian teaching in which...
An Explainable Artificial Intelligence System for Small-unit Tactical Behavior
As the artificial intelligence (AI) systems in military simulations and computer games become more complex, their actions become increasingly difficult for users to understand. Expert systems for medical diagnosis have addressed this challenge through the addition of explanation generation systems that explain a system's internal processes. This paper describes the AI architecture and associated...
Journal
Journal title: Remote Sensing
Year: 2023
ISSN: 2315-4632, 2315-4675
DOI: https://doi.org/10.3390/rs15092248